Multi-Task Learning for Sequential Data

Authors

  • Ya Xue
  • Shihao Ji
Abstract

The problem of multi-task learning (MTL) is considered for sequential data, such as that typically modeled via a hidden Markov model (HMM). A given task is composed of a set of sequential data, for which an HMM is to be learned, and MTL is employed to learn the multiple task-dependent HMMs jointly, through appropriate sharing of data. The HMM-MTL formulation is implemented in a Bayesian setting, by utilizing a common prior on the cross-task HMM parameters. The prior is characterized in a nonparametric manner, utilizing a Dirichlet process (DP), and a variational Bayes (VB) formulation is employed for efficient inference. The DP-based HMM-MTL formulation is demonstrated using both synthesized and real sequential data, wherein the MTL formulation is demonstrated to yield improved performance relative to single-task learning, for cases in which at least a subset of the tasks are related, with this task relatedness determined automatically by the algorithm.
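The abstract describes a generative construction in which each task's HMM parameters are drawn from a shared Dirichlet process prior, so that related tasks end up sharing parameters. Below is a minimal, hypothetical Python sketch of that prior (a truncated stick-breaking DP over discrete-emission HMM parameters). It is not the authors' code and omits the variational Bayes inference entirely; it only illustrates how task relatedness arises when several tasks select the same atom.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking_weights(alpha, truncation):
    """Mixture weights from a truncated stick-breaking construction of a DP."""
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

def sample_hmm_atom(n_states, n_symbols):
    """One 'atom' from the base measure G0: parameters of a discrete-emission HMM."""
    return {
        "initial": rng.dirichlet(np.ones(n_states)),
        "transition": rng.dirichlet(np.ones(n_states), size=n_states),
        "emission": rng.dirichlet(np.ones(n_symbols), size=n_states),
    }

def sample_task_hmms(n_tasks, alpha=1.0, truncation=10, n_states=3, n_symbols=5):
    """Assign each task an HMM drawn from the shared DP prior (illustrative sketch)."""
    weights = stick_breaking_weights(alpha, truncation)
    weights = weights / weights.sum()            # renormalize after truncation
    atoms = [sample_hmm_atom(n_states, n_symbols) for _ in range(truncation)]
    assignments = rng.choice(truncation, size=n_tasks, p=weights)
    return assignments, [atoms[k] for k in assignments]

assignments, task_hmms = sample_task_hmms(n_tasks=8)
print("task-to-atom assignments:", assignments)  # repeated indices = tasks sharing an HMM
```

Running the sketch prints the task-to-atom assignments; repeated indices correspond to tasks that would be modeled with the same HMM under this prior, which is the sharing mechanism the paper exploits.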


Related references

Comparing Bandwidth and Self-control Modeling on Learning a Sequential Timing Task

Modeling is a process in which an observer watches another person's behavior and adapts his or her own behavior to it as a result of the interaction. The aim of the present study was to investigate and compare the effectiveness of bandwidth modeling and self-control modeling on the performance and learning of a sequential timing task. Accordingly, two groups, bandwidth and self-control, were compared. The task was pres...


Sequential Inference for Deep Gaussian Process

A deep Gaussian process (DGP) is a deep network in which each layer is modelled with a Gaussian process (GP). It is a flexible model that can capture highly nonlinear functions for complex data sets. However, the network structure of a DGP often makes inference computationally expensive. In this paper, we propose an efficient sequential inference framework for DGP, where the data is processed seq...


Online Multi-Task Learning Using Active Sampling

One of the long-standing challenges in Artificial Intelligence for goal-directed behavior is to build a single agent that can solve multiple tasks. Recent progress in multi-task learning for goal-directed sequential tasks has been in the form of distillation-based learning, wherein a single student network learns from multiple task-specific expert networks by mimicking the task-specific policie...


Classifying Objects at Different Sizes with Multi-Scale Stacked Sequential Learning

Sequential learning is the discipline of machine learning that deals with dependent data. In this paper, we use the Multi-scale Stacked Sequential Learning approach (MSSL) to solve the task of pixel-wise classification based on contextual information. The main contribution of this work is a shifting technique applied during the testing phase that makes it possible, thanks to template images, to c...


Online Multi-Task Learning Using Biased Sampling

One of the long-standing challenges in Artificial Intelligence for goal-directed behavior is to build a single agent that can solve multiple tasks. Recent progress in multi-task learning for learning behavior in many goal-directed sequential tasks has been in the form of distillation-based learning, wherein a single student network learns from multiple task-specific teacher networks by mimickin...




Publication date: 2007